Introduction
Recognizing 3-D objects from a single 2-D image is one of the most challenging problems in computer vision, requiring the solution of several complex, interrelated tasks. Humans perform this task effortlessly and have no difficulty describing objects in a scene, even objects they have never seen before; this is illustrated in Figure 5.1. The first task is to extract a set of features from the image, thus producing descriptions of the image more abstract than an array of pixel values. A second task involves defining a model description and producing a database of such models. One must then establish correspondences between the descriptions of the image and those of the models. The last task consists of learning new objects and adding their descriptions to the database. If the database is large, an indexing scheme is also required for efficiency.
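To make these tasks concrete, the following is a minimal Python sketch of how extraction, modeling, matching, learning, and indexing might fit together. All names here (Feature, ModelDatabase, learn, recognize) are hypothetical illustrations, not taken from any particular system, and feature extraction is stubbed with hand-made features rather than computed from pixels.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Feature:
    """A description abstracted from raw pixels (e.g. an edge or region)."""
    kind: str           # e.g. "edge", "region" (illustrative categories)
    descriptor: tuple   # coarse, hashable summary used for matching/indexing


class ModelDatabase:
    """Holds model descriptions; an inverted index on descriptors keeps
    candidate retrieval efficient as the database grows (the indexing task)."""

    def __init__(self) -> None:
        self.models: dict[str, list[Feature]] = {}
        self.index: dict[tuple, set[str]] = {}

    def learn(self, name: str, features: list[Feature]) -> None:
        """The learning task: add a new object's description to the database."""
        self.models[name] = features
        for f in features:
            self.index.setdefault(f.descriptor, set()).add(name)

    def recognize(self, image_features: list[Feature]) -> str | None:
        """The matching task: use the index to shortlist candidate models,
        then score correspondences between image and model descriptions."""
        candidates: set[str] = set()
        for f in image_features:
            candidates |= self.index.get(f.descriptor, set())

        def score(name: str) -> int:
            model_desc = {f.descriptor for f in self.models[name]}
            return sum(f.descriptor in model_desc for f in image_features)

        return max(candidates, key=score, default=None)


# Feature extraction (the first task) is stubbed here with hand-made features;
# a real system would compute edges and regions from the pixel array.
db = ModelDatabase()
db.learn("mug", [Feature("region", ("cylinder",)), Feature("edge", ("handle",))])
db.learn("book", [Feature("region", ("box",)), Feature("edge", ("spine",))])
print(db.recognize([Feature("region", ("cylinder",)), Feature("edge", ("rim",))]))  # -> "mug"
```

The inverted index reflects the efficiency point above: rather than comparing the image against every stored model, only models sharing at least one descriptor with the image are scored.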
Although these tasks seem clear and well defined, no consensus has emerged regarding the choice and level of features (2-D or 3-D), the matching strategy, the type of indexing used, or the order in which these tasks should be performed. Furthermore, it has not even been established that all of these tasks are necessary for recognizing objects in images.
The early days of computer vision were dominated by the dogma of the 2 1/2-D sketch (Marr 1981). Consequently, it was “obvious” that the only way to process an image was to extract features such as edges and regions, build from them a description of the visible surfaces, and then derive 3-D descriptions from those surfaces.